Clustering and Classification

The list of materials and links related to clustering and classification can be found below.
course slides by Emma Kämäräinen
DataCamp exercise

RStudio Exercise 4

After solving the DataCamp exercise and going through the embedded links, I got a general overview of the topic. In the following sections, I will prepare a report based on the exercise instructions. Unlike in earlier weeks, the data wrangling exercise will be done after the data analysis part; in fact, the data wrangling exercise is part of Dimensionality Reduction Techniques. In the sections below, I will go through clustering and classification of a dataset using the open Boston data from the MASS package.

Data

First and foremost, it is important to get an overview of the data being analysed. As mentioned earlier, we will use the Boston data from the MASS package.

library(MASS)
data(Boston)
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14

The Boston data was collected to study the housing values in the suburbs of Boston. The table contains 506 observations for 14 different variables. The descriptions for each of the 14 variables are listed below.

crim: per capita crime rate by town.
zn: proportion of residential land zoned for lots over 25,000 sq.ft.
indus: proportion of non-retail business acres per town.
chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
nox: nitrogen oxides concentration (parts per 10 million).
rm: average number of rooms per dwelling.
age: proportion of owner-occupied units built prior to 1940.
dis: weighted mean of distances to five Boston employment centres.
rad: index of accessibility to radial highways.
tax: full-value property-tax rate per $10,000.
ptratio: pupil-teacher ratio by town.
black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
lstat: lower status of the population (percent).
medv: median value of owner-occupied homes in $1000s.

Data Summary

Now, let’s look at a summary of the Boston data in the form of a table (instead of the default layout) using the pandoc.table() function of the pander package.

library(pander)
pandoc.table(summary(Boston), caption = "Summary of Boston data", split.table = 120)
## 
## -----------------------------------------------------------------------------------------------------------------------
##        crim               zn             indus            chas              nox              rm              age       
## ------------------ ---------------- --------------- ----------------- ---------------- --------------- ----------------
##  Min.  : 0.00632     Min.  : 0.00    Min.  : 0.46    Min.  :0.00000    Min.  :0.3850    Min.  :3.561     Min.  : 2.90  
## 
##  1st Qu.: 0.08204   1st Qu.: 0.00    1st Qu.: 5.19   1st Qu.:0.00000   1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02 
## 
##  Median : 0.25651   Median : 0.00    Median : 9.69   Median :0.00000   Median :0.5380   Median :6.208   Median : 77.50 
## 
##   Mean : 3.61352     Mean : 11.36     Mean :11.14     Mean :0.06917     Mean :0.5547     Mean :6.285     Mean : 68.57  
## 
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000   3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08 
## 
##  Max.  :88.97620    Max.  :100.00    Max.  :27.74    Max.  :1.00000    Max.  :0.8710    Max.  :8.780    Max.  :100.00  
## -----------------------------------------------------------------------------------------------------------------------
## 
## Table: Summary of Boston data (continued below)
## 
##  
## ------------------------------------------------------------------------------------------------------------------
##       dis              rad              tax           ptratio          black            lstat           medv      
## ---------------- ---------------- --------------- --------------- ---------------- --------------- ---------------
##  Min.  : 1.130    Min.  : 1.000    Min.  :187.0    Min.  :12.60     Min.  : 0.32    Min.  : 1.73    Min.  : 5.00  
## 
##  1st Qu.: 2.100   1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38   1st Qu.: 6.95   1st Qu.:17.02 
## 
##  Median : 3.207   Median : 5.000   Median :330.0   Median :19.05   Median :391.44   Median :11.36   Median :21.20 
## 
##   Mean : 3.795     Mean : 9.549     Mean :408.2     Mean :18.46     Mean :356.67     Mean :12.65     Mean :22.53  
## 
##  3rd Qu.: 5.188   3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23   3rd Qu.:16.95   3rd Qu.:25.00 
## 
##  Max.  :12.127    Max.  :24.000    Max.  :711.0    Max.  :22.00    Max.  :396.90    Max.  :37.97    Max.  :50.00  
## ------------------------------------------------------------------------------------------------------------------

After getting the statistical summary, it is worthwhile to see to what extent the variables are correlated. For that, we use the cor() function on the Boston data.

library(corrplot)
## corrplot 0.84 loaded
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:MASS':
## 
##     select
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
corr_boston<-cor(Boston) %>% round(2)
pandoc.table(corr_boston, split.table = 120)
## 
## -------------------------------------------------------------------------------------------------------------------------------
##    &nbsp;      crim     zn     indus   chas     nox     rm      age     dis     rad     tax    ptratio   black   lstat   medv  
## ------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- --------- ------- ------- -------
##   **crim**       1     -0.2    0.41    -0.06   0.42    -0.22   0.35    -0.38   0.63    0.58     0.29     -0.39   0.46    -0.39 
## 
##    **zn**      -0.2      1     -0.53   -0.04   -0.52   0.31    -0.57   0.66    -0.31   -0.31    -0.39    0.18    -0.41   0.36  
## 
##   **indus**    0.41    -0.53     1     0.06    0.76    -0.39   0.64    -0.71    0.6    0.72     0.38     -0.36    0.6    -0.48 
## 
##   **chas**     -0.06   -0.04   0.06      1     0.09    0.09    0.09    -0.1    -0.01   -0.04    -0.12    0.05    -0.05   0.18  
## 
##    **nox**     0.42    -0.52   0.76    0.09      1     -0.3    0.73    -0.77   0.61    0.67     0.19     -0.38   0.59    -0.43 
## 
##    **rm**      -0.22   0.31    -0.39   0.09    -0.3      1     -0.24   0.21    -0.21   -0.29    -0.36    0.13    -0.61    0.7  
## 
##    **age**     0.35    -0.57   0.64    0.09    0.73    -0.24     1     -0.75   0.46    0.51     0.26     -0.27    0.6    -0.38 
## 
##    **dis**     -0.38   0.66    -0.71   -0.1    -0.77   0.21    -0.75     1     -0.49   -0.53    -0.23    0.29    -0.5    0.25  
## 
##    **rad**     0.63    -0.31    0.6    -0.01   0.61    -0.21   0.46    -0.49     1     0.91     0.46     -0.44   0.49    -0.38 
## 
##    **tax**     0.58    -0.31   0.72    -0.04   0.67    -0.29   0.51    -0.53   0.91      1      0.46     -0.44   0.54    -0.47 
## 
##  **ptratio**   0.29    -0.39   0.38    -0.12   0.19    -0.36   0.26    -0.23   0.46    0.46       1      -0.18   0.37    -0.51 
## 
##   **black**    -0.39   0.18    -0.36   0.05    -0.38   0.13    -0.27   0.29    -0.44   -0.44    -0.18      1     -0.37   0.33  
## 
##   **lstat**    0.46    -0.41    0.6    -0.05   0.59    -0.61    0.6    -0.5    0.49    0.54     0.37     -0.37     1     -0.74 
## 
##   **medv**     -0.39   0.36    -0.48   0.18    -0.43    0.7    -0.38   0.25    -0.38   -0.47    -0.51    0.33    -0.74     1   
## -------------------------------------------------------------------------------------------------------------------------------

The table above shows the correlation matrix of all variables. A bird’s-eye view of the matrix shows that tax (full-value property-tax rate) and rad (index of accessibility to radial highways) are the most positively correlated variables (0.91), whereas nox (nitrogen oxides concentration) and dis (weighted mean of distances to five Boston employment centres) are the most negatively correlated (-0.77). Moreover, chas (Charles River dummy variable) and rad are the two least correlated variables.
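
As a quick cross-check, the strongest pairwise correlations can also be listed programmatically instead of scanning the table by eye. Below is a minimal sketch that reuses the corr_boston matrix computed above:

cc <- corr_boston
cc[upper.tri(cc, diag = TRUE)] <- NA               # keep each variable pair only once
cc_pairs <- na.omit(as.data.frame(as.table(cc)))   # long format: Var1, Var2, Freq
head(cc_pairs[order(-abs(cc_pairs$Freq)), ], 5)    # five strongest correlations by absolute value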

The same information can be presented as a graphical overview. This time we will make a correlogram, a graphical representation of the correlation matrix. The corrplot() function of the corrplot package will be used to visualize the correlations between all the variables of the Boston dataset.

corrplot(corr_boston, method = "circle", tl.col = "black", cl.pos="b", tl.pos = "d", type = "upper" , tl.cex = 0.9 )

The graph above gives a much quicker impression of which variables are more correlated with each other. Positive correlations are displayed in blue and negative correlations in red, with the intensity of the color and the size of the circle proportional to the correlation coefficient. The same relationships described above with the correlation summary can thus be read from the size of the circles (strength of the correlation) and from their color (whether the correlation is positive or negative).

Data Standardization

Scaling the data is useful for linear discriminant analysis. The scale() function will be used to scale the whole dataset: the column mean is subtracted from the corresponding column and the difference is divided by the column’s standard deviation, i.e. scaled(x) = (x - mean(x)) / sd(x).
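
To illustrate that this is exactly what scale() does, here is a minimal sketch that computes the standardization manually and compares it to the scale() output (it should return TRUE):

b <- as.matrix(Boston)
manual_scaled <- sweep(sweep(b, 2, colMeans(b), "-"), 2, apply(b, 2, sd), "/")  # (x - mean(x)) / sd(x), column by column
all.equal(scale(b), manual_scaled, check.attributes = FALSE)                    # TRUE if the two versions agree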

boston_scaled<-scale(Boston)
pandoc.table(summary(boston_scaled), caption = "Summary of  Scaled Boston data", split.table = 120)
## 
## --------------------------------------------------------------------------------------------------------------
##        crim                 zn               indus             chas               nox               rm        
## ------------------- ------------------ ----------------- ----------------- ----------------- -----------------
##  Min.  :-0.419367    Min.  :-0.48724    Min.  :-1.5563    Min.  :-0.2723    Min.  :-1.4644    Min.  :-3.8764  
## 
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681 
## 
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441   Median :-0.1084 
## 
##   Mean : 0.000000     Mean : 0.00000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000  
## 
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823 
## 
##  Max.  : 9.924110    Max.  : 3.80047    Max.  : 2.4202    Max.  : 3.6648    Max.  : 2.7296    Max.  : 3.5515  
## --------------------------------------------------------------------------------------------------------------
## 
## Table: Summary of  Scaled Boston data (continued below)
## 
##  
## -----------------------------------------------------------------------------------------------------------
##        age               dis               rad               tax             ptratio            black      
## ----------------- ----------------- ----------------- ----------------- ----------------- -----------------
##  Min.  :-2.3331    Min.  :-1.2658    Min.  :-0.9819    Min.  :-1.3127    Min.  :-2.7047    Min.  :-3.9033  
## 
##  1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049 
## 
##  Median : 0.3171   Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808 
## 
##   Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000  
## 
##  3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332 
## 
##  Max.  : 1.1164    Max.  : 3.9566    Max.  : 1.6596    Max.  : 1.7964    Max.  : 1.6372    Max.  : 0.4406  
## -----------------------------------------------------------------------------------------------------------
## 
## Table: Table continues below
## 
##  
## -----------------------------------
##       lstat             medv       
## ----------------- -----------------
##  Min.  :-1.5296    Min.  :-1.9063  
## 
##  1st Qu.:-0.7986   1st Qu.:-0.5989 
## 
##  Median :-0.1811   Median :-0.1449 
## 
##   Mean : 0.0000     Mean : 0.0000  
## 
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683 
## 
##  Max.  : 3.5453    Max.  : 2.9865  
## -----------------------------------
#corr_bostons<-cor(boston_scaled) %>% round(2)
#pandoc.table(corr_bostons, split.table = 120)

We can make some important observations from the summary of the scaled data. The summary of the scaled Boston data has changed compared to the non-scaled data. Most importantly, all the mean values have become zero, and the other values such as the minimum, maximum, median and quartiles (1st and 3rd) have also changed for all variables.
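
A quick sanity check (sketch) confirms the effect of the standardization: the column means are numerically zero and the column standard deviations are one.

round(colMeans(boston_scaled), 2)      # all columns should show 0
round(apply(boston_scaled, 2, sd), 2)  # all columns should show 1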

Next, we will create a quantile vector for crime by applying the quantile() function to the scaled Boston data frame. The quantiles are then used as break points to cut the scaled crim variable into a categorical crime variable with meaningful labels describing the crime rate: low, med_low, med_high and high. Lastly, we will drop the original crim variable, add the newly created crime variable and build the required data frame.

boston_scaled<- data.frame(boston_scaled)
qvc<-quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = qvc, label = c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled<-data.frame(boston_scaled, crime)
#table(boston_scaled$crime)

After creating the customized dataset in the earlier steps, we will now divide it into training and test sets, where 80% of the data belongs to the training set and the remaining 20% is used as the test set.

#library(MASS)
n<-nrow(boston_scaled)
ind <- sample(n, size = n*0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]

Now that we have divided the dataset into training and test sets, we can fit a linear discriminant analysis on the training set, where the crime rate category is predicted based on all the other variables.

Linear Discriminant Analysis

lda.fit <- lda(crime ~ ., data = train)
#add biplot arrows to an lda
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col=classes)
lda.arrows(lda.fit, myscale = 2)

# target classes as numeric
#classes <- as.numeric(train$crime)

# plot the lda results
#plot(lda.fit, dimen = 2, col = classes, pch = classes)

Based on the biplot, it can be seen that the rad variable alone acts as a predictor of the high crime rate class in the Boston data. The remaining 12 variables, on the other hand, are associated with the low, medium-low and medium-high crime rate classes; their grouping is fuzzy, and it is difficult to say whether any single one of them can separate the associated observations.
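
One way to support this reading is to look at the class-wise means of the predictors in the training data. The sketch below shows the mean of the (scaled) rad variable per crime class; the exact values depend on the random train/test split, but rad is typically clearly higher in the high class than in the other classes:

aggregate(rad ~ crime, data = train, FUN = mean)  # mean of scaled rad per crime category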

Class Prediction

crime_cat<-test$crime
test<-dplyr::select(test, -crime)
lda.pred<-predict(lda.fit, newdata = test)
table(correct = crime_cat, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       12      14        0    0
##   med_low    4      16        7    0
##   med_high   0      12       11    1
##   high       0       0        1   24

I tried to grasp the concept of the above matrix, also referred to as a confusion matrix, by going through this blog. Every time the matrix is generated, the number of correct and predicted cases for each of the classes (low, med_low, med_high, high) changes. This change is expected because of the random split into training and test sets. However, it was also observed that the predictions for the high class fluctuated much less than those for the other classes.
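
The confusion matrix can also be condensed into accuracy figures. The following sketch computes the overall proportion of correct predictions as well as the per-class accuracy from the cross-tabulation above (again, the exact numbers vary with the random split):

conf_mat <- table(correct = crime_cat, predicted = lda.pred$class)
sum(diag(conf_mat)) / sum(conf_mat)  # overall proportion of correctly classified observations
diag(conf_mat) / rowSums(conf_mat)   # accuracy within each crime class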

K-means Clustering

In order to practice K-means clustering, we will reload the Boston data, scale the data and calculate the distances between the observations.

data(Boston)
boston_scaled1<-as.data.frame(scale(Boston))
dist_eu<-dist(boston_scaled1)
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
#head(boston_scaled1)

We will use the scaled Boston data to perform k-means clustering. It is not always trivial to know beforehand how many clusters describe the data. Therefore, we first pick some number of clusters more or less at random (unless the summary of the data or the graphical overviews give us an idea), and then use a few other methods to identify the right number of clusters. This part is more or less inspired by this R-bloggers post and this Stack Overflow question.

First we start with random cluster number. Let’s start with k=4 and apply k-means on the data.

#Let us apply kmeans for k=4 clusters
kmm <- kmeans(boston_scaled1, 4, nstart = 50, iter.max = 15) # iter.max = 15 so that the algorithm converges and nstart = 50 so that at least 50 random starting sets are tried

The elbow method is another well-known technique that can be used to estimate the number of clusters.

#Elbow Method for finding the optimal number of clusters
library(ggplot2)
set.seed(1234)
# Compute and plot wss for k = 1 to k = 15.
k.max <- 15
data <- boston_scaled1
wss <- sapply(1:k.max, 
              function(k){kmeans(data, k)$tot.withinss})
#wss
qplot(1:k.max, wss, geom = c("point", "line"),
     xlab="Number of clusters K",
     ylab="Total within-clusters sum of squares")

The elbow plot suggests that we may not see more than two clear clusters, but it is always good to confirm such a reading with one more method, since there is no shortage of methods for analyses like this. Therefore, we will additionally use the NbClust package.

library(NbClust)
nb <- NbClust(boston_scaled1, diss=NULL, distance = "euclidean", 
              min.nc=2, max.nc=5, method = "kmeans", 
              index = "all", alphaBeale = 0.1)

## *** : The Hubert index is a graphical method of determining the number of clusters.
##                 In the plot of Hubert index, we seek a significant knee that corresponds to a 
##                 significant increase of the value of the measure i.e the significant peak in Hubert
##                 index second differences plot. 
## 

## *** : The D index is a graphical method of determining the number of clusters. 
##                 In the plot of D index, we seek a significant knee (the significant peak in Dindex
##                 second differences plot) that corresponds to a significant increase of the value of
##                 the measure. 
##  
## ******************************************************************* 
## * Among all indices:                                                
## * 12 proposed 2 as the best number of clusters 
## * 6 proposed 3 as the best number of clusters 
## * 3 proposed 4 as the best number of clusters 
## * 3 proposed 5 as the best number of clusters 
## 
##                    ***** Conclusion *****                            
##  
## * According to the majority rule, the best number of clusters is  2 
##  
##  
## *******************************************************************
#hist(nb$Best.nc[1,], breaks = max(na.omit(nb$Best.nc[1,])))

Now, it’s much clearer that the data is described better with two clusters. With that, we run k-means algorithm again.

#Let us apply kmeans with the chosen k=2 clusters
km_final <- kmeans(boston_scaled1, centers = 2)
pairs(boston_scaled1[3:9], col=km_final$cluster)

The clusters in the above plot are divided into two groups represented by two colors, red and black. Some of the variable pairs are better separated than others. One notable observation concerns the chas variable: in all the pairs it forms, the two clusters overlap heavily, so chas does not separate them. On the other hand, the clusters formed along the rad variable are better separated.
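
This can also be checked numerically. The sketch below cross-tabulates the two k-means clusters against the binary chas variable, which shows directly how little chas contributes to the separation (the result depends on the particular k-means run):

table(cluster = km_final$cluster, chas = Boston$chas)  # cluster membership vs. Charles River dummy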

More LDA
In the following section, we will use a random cluster number (k = 6) and perform LDA. We follow the basic steps of scaling and distance calculation. Finally, we will see what the biplot looks like on the whole dataset when we try to group the observations into six categories.

boston_scaled2<-as.data.frame(scale(Boston))
#head(boston_scaled2)
set.seed(1234)
km_bs2<-kmeans(dist_eu, centers = 6)
#head(km_bs2)
myclust<-data.frame(km_bs2$cluster)
boston_scaled2$clust<-km_bs2$cluster
#head(boston_scaled2)
lda.fit_bs2<-lda(clust~., data = boston_scaled2 )
lda.fit_bs2
## Call:
## lda(clust ~ ., data = boston_scaled2)
## 
## Prior probabilities of groups:
##          1          2          3          4          5          6 
## 0.10079051 0.19960474 0.09486166 0.20553360 0.12845850 0.27075099 
## 
## Group means:
##         crim          zn        indus       chas         nox          rm
## 1 -0.4149170  2.55535505 -1.228758914 -0.1951310 -1.21919439  0.78676843
## 2  0.3880377 -0.48724019  1.165421314 -0.2723291  0.98659851 -0.28553884
## 3 -0.3613809 -0.09419977 -0.474086929  1.5321752 -0.12487357  1.27068222
## 4 -0.3580718 -0.46023584 -0.003188584 -0.2723291 -0.09478548 -0.35414265
## 5  1.4172264 -0.48724019  1.069802298  0.4545202  1.34622349 -0.73713928
## 6 -0.4055840  0.02149547 -0.740804469 -0.2723291 -0.79649957  0.09099544
##          age        dis        rad        tax    ptratio       black
## 1 -1.4488239  1.7464736 -0.7048880 -0.5692695 -0.8353442  0.34924852
## 2  0.7651453 -0.7898745  1.1388129  1.2431405  0.6932747  0.04498348
## 3  0.2307707 -0.3386056 -0.4961654 -0.7220694 -1.1226766  0.32813467
## 4  0.4093998 -0.2612071 -0.5865335 -0.4342609  0.2608189  0.19191309
## 5  0.8557425 -0.9615698  1.2885597  1.2934457  0.4142248 -1.68787016
## 6 -0.8223904  0.7053125 -0.5694290 -0.7355910 -0.2013102  0.37698635
##        lstat       medv
## 1 -0.9773530  0.8760790
## 2  0.6734731 -0.5987824
## 3 -0.6138415  1.4407282
## 4  0.1508360 -0.2838601
## 5  1.1961180 -0.8078336
## 6 -0.5996059  0.2092896
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3         LD4         LD5
## crim     0.04811996 -0.28556378 -0.55488255  0.49400398  0.05329096
## zn      -0.13738829 -1.83004313  0.34546140 -0.26802062 -0.87758918
## indus    0.74925386 -0.10015651  0.61607026 -0.42031079  0.25109137
## chas     0.13287282 -0.13228082 -0.94523359 -0.16829634  0.04786106
## nox      1.21764057 -0.81216848 -0.12506389  0.27633410  0.13213424
## rm      -0.12060003 -0.04058521 -0.02502279 -0.75468374  0.21331834
## age      0.17397462  0.34382124 -0.07430813 -0.37956005 -0.95205471
## dis     -0.36273454 -0.54652248  0.11546588  0.26210162  0.59195828
## rad      0.61453519  0.40958433  0.29006265 -0.40963042  1.56473994
## tax      0.75124298 -1.03741454  0.22707980 -0.17126395 -0.61781814
## ptratio  0.36217649 -0.18603253  0.30060517  0.16017164 -0.53729844
## black   -0.27542772  0.27016025  0.77143821 -0.87012879  0.23445845
## lstat    0.48988940 -0.40861927 -0.53017288 -0.23295699 -0.06758426
## medv     0.22977036 -0.57759705 -0.86635437 -0.06977308 -0.10361245
## 
## Proportion of trace:
##    LD1    LD2    LD3    LD4    LD5 
## 0.7285 0.1498 0.0750 0.0298 0.0168
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
plot(lda.fit_bs2, dimen = 2)
lda.arrows(lda.fit_bs2, myscale = 3)

I must admit that the number of clusters I chose was more than needed. I believe three or four clusters could group the whole dataset. According to the biplot, the top three most influential variables are zn, nox and tax.
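
This reading of the biplot can be double-checked from the coefficients printed above. Since the arrows are drawn in the LD1-LD2 plane, a rough measure of a variable's influence is the length of its arrow, i.e. sqrt(LD1^2 + LD2^2). Below is a minimal sketch:

ld12 <- lda.fit_bs2$scaling[, 1:2]              # coefficients of the first two discriminants
sort(sqrt(rowSums(ld12^2)), decreasing = TRUE)  # arrow lengths; zn, nox and tax come out on top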

Better ways to visualize LDA

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
#Second 3D plot where colors are defined by clusters of k-means
#k-means_matpro<-kmeans(matrix_product, )
#head(train)
#train$cl<-myclust
#boston_scaled2$cl<-myclust
#head(boston_scaled2)
#head(train)
#rownames(train)
#rownames(boston_scaled2)
train$cl <- boston_scaled2$clust[match(rownames(train), rownames(boston_scaled2))]
#head(train)
#nrow(train)

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = "scatter3d", mode="markers", color = train$cl)

According to my observation, the clustering based on k-means has turned out to be more informative than the one based on the crime classes.

Additional links (also included in the course slides)
Blog post by Jason Brownlee on LDA
R-bloggers post on LDA
R-bloggers post on K Means Clustering in R